automatic differentiation


One-step differentiation of iterative algorithms

Neural Information Processing Systems

For iterative algorithms, implicit differentiation alleviates the computational burden of automatic differentiation but requires a custom implementation of Jacobian evaluation. In this paper, we study one-step differentiation, also known as Jacobian-free backpropagation, a method as easy as automatic differentiation and as efficient as implicit differentiation for fast algorithms (e.g., superlinear optimization methods).
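To make the method concrete, here is a minimal JAX sketch of one-step (Jacobian-free) differentiation of a fixed-point iteration x_{k+1} = F(x_k, theta); the map F and all names below are illustrative assumptions, not taken from the paper. The iterations run under stop_gradient, so backpropagation touches only a single final application of F.

```python
# Minimal sketch of one-step (Jacobian-free) differentiation in JAX.
# The iteration map F and all names are illustrative, not from the paper.
import jax
import jax.numpy as jnp

def one_step_diff_fixed_point(F, x0, theta, num_iters=100):
    x = x0
    for _ in range(num_iters):
        # Run the solver without recording anything for backprop.
        x = jax.lax.stop_gradient(F(x, theta))
    # Differentiate through a single extra application of F only.
    return F(x, theta)

# Toy fixed point x = tanh(W x + b), with theta = (W, b).
def F(x, theta):
    W, b = theta
    return jnp.tanh(W @ x + b)

W = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (3, 3))
b = jnp.zeros(3)
loss = lambda theta: jnp.sum(one_step_diff_fixed_point(F, jnp.zeros(3), theta) ** 2)
grads = jax.grad(loss)((W, b))  # memory cost of one step, not num_iters steps
```

This matches the abstract's claim: the code is as simple as plain automatic differentiation (no custom vector-Jacobian product or linear solve, as implicit differentiation would require), while the memory and compute cost of the backward pass is that of a single iteration.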






bbc9d480a8257889d2af88983e8b126a-Paper-Conference.pdf

Neural Information Processing Systems

While existing automatic differentiation (AD) frameworks allow flexibly composing model architectures, they do not provide the same flexibility for composing learning algorithms: everything has to be implemented in terms of backpropagation.
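An illustrative sketch of this limitation (not code from the paper): in a standard AD framework, even a simple non-gradient learning signal has to be packaged as a custom backward rule so it can flow through the backpropagation machinery. The sign-based surrogate below is a made-up example.

```python
# Illustration only: a non-gradient learning signal must be disguised as a
# custom backward (VJP) rule to fit an AD framework's backpropagation.
import jax
import jax.numpy as jnp

@jax.custom_vjp
def dense(x, W):
    return W @ x

def dense_fwd(x, W):
    return W @ x, (x, W)

def dense_bwd(res, g):
    x, W = res
    # Made-up surrogate signal for W instead of the true gradient outer(g, x).
    return W.T @ g, jnp.sign(jnp.outer(g, x))

dense.defvjp(dense_fwd, dense_bwd)

x = jnp.arange(3.0)
W = jnp.ones((2, 3))
surrogate_grad_W = jax.grad(lambda W: jnp.sum(dense(x, W)))(W)
```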



Efficient Learning of Generative Models via Finite-Difference Score Matching

Neural Information Processing Systems

Several machine learning applications involve the optimization of higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation.
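As a concrete instance of the general idea (a hedged sketch of finite-difference substitution for higher-order AD, not the paper's estimator): a Hessian-vector product, which is a gradient of a gradient, can be approximated with two first-order gradient calls, trading nested differentiation for a small discretization error.

```python
# Approximate the Hessian-vector product H(x) v of f with central finite
# differences of the first-order gradient, avoiding nested differentiation.
# Generic sketch, not the estimator proposed in the paper.
import jax
import jax.numpy as jnp

def fd_hvp(f, x, v, eps=1e-4):
    g = jax.grad(f)
    return (g(x + eps * v) - g(x - eps * v)) / (2.0 * eps)

f = lambda x: jnp.sum(jnp.sin(x) ** 2)
x = jnp.linspace(0.0, 1.0, 4)
v = jnp.ones_like(x)
approx = fd_hvp(f, x, v)                     # two first-order gradients
exact = jax.jvp(jax.grad(f), (x,), (v,))[1]  # nested AD, for comparison
```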



70afbf2259b4449d8ae1429e054df1b1-Paper.pdf

Neural Information Processing Systems

This approach allows for formal subdifferentiation: for instance, replacing derivatives by Clarke Jacobians in the usual differentiation formulas is fully justified for a wide class of nonsmooth problems.
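A standard worked instance of this calculus (a well-known fact about the Clarke subdifferential, not a result specific to the cited paper): for the ReLU map f(x) = max(x, 0),

```latex
\partial_C f(x) \;=\;
\begin{cases}
  \{0\}  & x < 0, \\
  [0, 1] & x = 0, \\
  \{1\}  & x > 0,
\end{cases}
```

so an AD system that propagates any element of [0, 1] at the kink (common implementations pick 0 or 1) is selecting an element of the Clarke Jacobian, and formal subdifferentiation means the usual chain-rule formulas remain meaningful with such set-valued derivatives in place of ordinary ones.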